Alright, thus far we have only used Queen neighborhood matrices with our data. Let’s use this exercise to try out different variations. First of all, run the code below to compile the data that were also used in the lecture.

## Reading layer `OGRGeoJSON' from data source 
##   `https://geoportal.stadt-koeln.de/arcgis/rest/services/Basiskarten/kgg/MapServer/20/query?where=objectid+is+not+null&text=&objectIds=&time=&geometry=&geometryType=esriGeometryEnvelope&inSR=&spatialRel=esriSpatialRelIntersects&distance=&units=esriSRUnit_Foot&relationParam=&outFields=*&returnGeometry=true&returnTrueCurves=false&maxAllowableOffset=&geometryPrecision=&outSR=4326&havingClause=&returnIdsOnly=false&returnCountOnly=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=&returnZ=false&returnM=false&gdbVersion=&historicMoment=&returnDistinctValues=false&resultOffset=&resultRecordCount=&returnExtentOnly=false&datumTransformation=&parameterValues=&rangeValues=&quantizationParameters=&featureEncoding=esriDefault&f=geojson' 
##   using driver `GeoJSON'
## Simple feature collection with 543 features and 20 fields
## Geometry type: MULTIPOLYGON
## Dimension:     XY
## Bounding box:  xmin: 6.77253 ymin: 50.83045 xmax: 7.162028 ymax: 51.08496
## Geodetic CRS:  WGS 84
## ℹ Using "','" as decimal and "'.'" as grouping mark. Use `read_delim()` for more control.
## Rows: 949 Columns: 79
## ── Column specification ──────────────────────────────────────────────────────────────────────────────────────────
## Delimiter: ";"
## chr  (3): wahl, ags, gebiet-name
## dbl (71): gebiet-nr, max-schnellmeldungen, anz-schnellmeldungen, A1, A2, A3, A, B, B1, C, D, E, F, D1, F1, D2,...
## lgl  (4): D30, F30, D31, F31
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
voting_districts <-
  glue::glue(
    "https://geoportal.stadt-koeln.de/arcgis/rest/services/Basiskarten/kgg/\\
    MapServer/20/query?where=objectid+is+not+null&text=&objectIds=&time=&\\
    geometry=&geometryType=esriGeometryEnvelope&inSR=&spatialRel=\\
    esriSpatialRelIntersects&distance=&units=esriSRUnit_Foot&relationParam=&\\
    outFields=*&returnGeometry=true&returnTrueCurves=false&maxAllowableOffset=\\
    &geometryPrecision=&outSR=4326&havingClause=&returnIdsOnly=false&return\\
    CountOnly=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=\\
    &returnZ=false&returnM=false&gdbVersion=&historicMoment=&returnDistinct\\
    Values=false&resultOffset=&resultRecordCount=&returnExtentOnly=false&datum\\
    Transformation=&parameterValues=&rangeValues=&quantizationParameters=&\\
    featureEncoding=esriDefault&f=geojson"
  ) %>% 
  sf::st_read(as_tibble = TRUE) %>%
  sf::st_transform(3035) %>% 
  dplyr::transmute(Stimmbezirk = as.numeric(nummer))

afd_votes <-
  glue::glue(
    "https://www.stadt-koeln.de/wahlen/bundestagswahl/09-2021/praesentation/\\
    Open-Data-Bundestagswahl476.csv"
  ) %>% 
  readr::read_csv2() %>%
  dplyr::transmute(Stimmbezirk = `gebiet-nr`, afd_share = (F1 / F) * 100)

election_results <-
  dplyr::left_join(
    voting_districts,
    afd_votes,
    by = "Stimmbezirk"
  )

immigrants_cologne <-
  z11::z11_get_100m_attribute(STAATSANGE_KURZ_2) %>%
  terra::crop(election_results) %>%
  terra::mask(terra::vect(election_results))


inhabitants_cologne <-
  z11::z11_get_100m_attribute(Einwohner) %>%
  terra::crop(election_results) %>%
  terra::mask(terra::vect(election_results))

immigrant_share_cologne <-
  (immigrants_cologne / inhabitants_cologne) * 100

election_results <-
  election_results %>%
  dplyr::mutate(
    immigrant_share = 
      exactextractr::exact_extract(immigrant_share_cologne, ., 'mean', progress = FALSE),
    inhabitants = 
      exactextractr::exact_extract(inhabitants_cologne, ., 'mean', progress = FALSE)
  )

Exercise 1

As in the lecture, create a neighborhood (weight) matrix, but this time do it for Queen and Rook neighborhoods. Also, apply a row-normalization.
You could either use the spdep package with its function spdep::poly2nb() or the more modern approach of the sfdep package with its function sfdep::st_contiguity(). In both cases, for Rook neighborhoods, you have to set the option queen = FALSE.
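A minimal sketch of the spdep route, assuming the `election_results` object compiled above (the object names `queen_weights` and `rook_weights` are illustrative):

```r
library(spdep)

# Queen contiguity: districts sharing at least one boundary point are neighbors
queen_nb <- spdep::poly2nb(election_results, queen = TRUE)

# Rook contiguity: districts must share a boundary segment, not just a point
rook_nb <- spdep::poly2nb(election_results, queen = FALSE)

# Row-normalization: style = "W" divides each weight by the number of
# neighbors in its row, so every row of the weight matrix sums to one
queen_weights <- spdep::nb2listw(queen_nb, style = "W")
rook_weights <- spdep::nb2listw(rook_nb, style = "W")
```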

Exercise 2

We have not used them yet, but you can also create distance-based weight matrices. Again, use the package of your choice and create weights for distances between 0 and 5000 meters, also with row-normalization.

For the purpose of this exercise, you also have to convert the polygon data to point coordinates. I’d propose using the centroids for this task:

election_results_centroids <- sf::st_centroid(election_results)

Use a map to verify that this conversion was successful.
If you use spdep, use the function spdep::dnearneigh(); if you use sfdep, use the function sfdep::st_dist_band().
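Sketched with spdep, assuming the `election_results_centroids` object created above (since the data were transformed to EPSG:3035, distances are measured in meters):

```r
library(spdep)

# quick visual check: the centroids should fall inside their polygons
plot(sf::st_geometry(election_results))
plot(sf::st_geometry(election_results_centroids), add = TRUE, pch = 20)

# neighbors are all centroids between 0 and 5000 meters away
distance_nb <- spdep::dnearneigh(election_results_centroids, d1 = 0, d2 = 5000)

# row-normalization, as with the contiguity weights
distance_weights <- spdep::nb2listw(distance_nb, style = "W")
```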

Exercise 3

Now let’s see how all of these different spatial weights perform in an actual analysis. Calculate Moran’s I and Geary’s C for each of the weights and report their results for the variable afd_share.
Here it really matters which path you have taken before, spdep or sfdep, as it determines how you solve this exercise.
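A sketch of the spdep route, assuming row-normalized listw objects named `queen_weights`, `rook_weights`, and `distance_weights` from the previous exercises (these names are illustrative):

```r
library(spdep)

weight_list <- list(
  queen    = queen_weights,
  rook     = rook_weights,
  distance = distance_weights
)

# Moran's I and Geary's C for afd_share under each weight definition
for (name in names(weight_list)) {
  cat("\n---", name, "---\n")
  print(spdep::moran.test(election_results$afd_share, listw = weight_list[[name]]))
  print(spdep::geary.test(election_results$afd_share, listw = weight_list[[name]]))
}
```

Comparing the three printed test statistics shows how sensitive the autocorrelation estimates are to the choice of neighborhood definition.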